[Research] Stop-Gap: An Emergent Process and Expansive Term for Imputation in Explaining Hallucinations
Introduction
In AI models, a hallucination is an output that is not grounded in the input data, often producing fluent but incorrect results. This document explores why hallucinations occur, examining the connections among terms such as "perplexity," "imputation," and "stop-gap." The term "stop-gap" is used here as shorthand for what may be happening during the sampling process within a model, and this document aims to clarify potential expressions, interpretations, and misunderstandings related to these concepts.
Hallucinations and Perplexity
Hallucination
The term "hallucination" often describes a sensory perception (such as a visual image or sound) that occurs in the absence of an actual external stimulus.
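Since this section pairs hallucination with perplexity, it may help to make perplexity concrete. As a minimal sketch (the function and the probability values below are illustrative assumptions, not taken from any particular model), perplexity is the exponential of the average negative log-probability a model assigned to the tokens it actually observed; a confident model scores low, an uncertain one scores high:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability
    assigned to each observed token (illustrative sketch)."""
    n = len(token_probs)
    nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(nll)

# Hypothetical per-token probabilities from two models:
confident = perplexity([0.9, 0.8, 0.95])   # low perplexity
uncertain = perplexity([0.2, 0.1, 0.3])    # high perplexity
```

Under this reading, high perplexity signals the kind of model uncertainty during sampling that the "stop-gap" framing in this document is meant to describe.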